Evaluating Large Language Models for Anxiety and Depression Classification using Counseling and Psychotherapy Transcripts
Junwei Sun, Siqi Ma, Yiran Fan, Peter Washington
University of Hawaii at Manoa, Honolulu, HI, USA. *Correspondence should be sent to: pyw@hawaii.edu. These authors contributed equally to this work.
Abstract: We aim to evaluate the efficacy of traditional machine learning and large language models (LLMs) in classifying anxiety and depression from long conversational transcripts. We fine-tuned both established transformer models (BERT, RoBERTa, Longformer) and a more recent large model (Mistral-7B), trained a Support Vector Machine with feature engineering, and assessed GPT models through prompting. We observe that state-of-the-art models fail to improve classification outcomes compared to traditional machine learning methods.
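The abstract mentions an SVM baseline built on engineered text features. A minimal sketch of what such a baseline could look like, using TF-IDF features and scikit-learn -- the toy transcripts and labels below are invented for illustration and are not the paper's data or its actual pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for long counseling transcripts (invented, not the paper's data).
transcripts = [
    "I feel worried all the time and my heart races before work",
    "Lately nothing interests me and I sleep most of the day",
    "I keep expecting something bad to happen and cannot relax",
    "I feel empty and hopeless about the future",
]
labels = ["anxiety", "depression", "anxiety", "depression"]

# TF-IDF n-gram features feeding a linear SVM: one common
# "feature engineering + SVM" setup for text classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(transcripts, labels)

print(model.predict(["I am constantly on edge and cannot relax"]))
```

A real baseline would of course train on many transcripts and report held-out metrics; the point here is only the shape of the feature-engineering-plus-SVM approach the abstract contrasts with LLMs.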
#ICML2023 invited talk: Shakir Mohamed on ML with social purpose
The 40th International Conference on Machine Learning (ICML) took place in Honolulu, Hawai'i from 23-29 July 2023. There were four invited talks as part of the programme, and in this post we summarise the presentation by Shakir Mohamed – "Machine learning with social purpose". In a talk of three interwoven parts, Shakir's aim was to encourage the amplification and acceleration of work on machine learning with social purpose. He is passionate about using machine learning to contribute to overcoming some of the global challenges that we face, and, as well as demonstrating some of his research in this space, he provided guidance on how researchers can widen their horizons and consider the social implications of their work. Modelling of weather and climate can have a big impact on society, with such models often providing the basis for decisions taken by policy makers.
#ICML2023 tweet round-up
The 40th International Conference on Machine Learning (ICML) took place last week in Honolulu, Hawaiʻi. As well as four invited talks, the programme boasted oral and poster presentations, affinity events, tutorials and workshops. Find out what the participants got up to over the course of the conference. Can't wait for our first invited speaker talks by the inimitable @MarzyehGhassemi and @shakir_za on Tuesday! pic.twitter.com/tNDi7RNIUt Amazing group from #LatinXinAI hiking the Makapu'u Point Lighthouse Trail to kick off our social events at @icmlconf #ICML2023 @_LXAI pic.twitter.com/cO6dKAz6x8
Congratulations to the #ICML2023 outstanding paper award winners
This year's International Conference on Machine Learning (ICML) is taking place in Honolulu, Hawai'i from 23-29 July. The winners of the outstanding paper awards for 2023 have now been announced. This paper introduces an interesting approach that aims to address the challenge of obtaining a learning-rate-free optimal bound for non-smooth stochastic convex optimization. The authors propose a novel method that overcomes the limitations imposed by traditional learning rate selection in optimizing such problems. This research makes a valuable and practical contribution to the field of optimization.
What's coming up at #ICML2023?
This year's International Conference on Machine Learning (ICML) will take place in Honolulu, Hawai'i from 23-29 July. As well as four invited talks, the programme boasts oral and poster presentations, affinity events, tutorials and workshops. The tutorials will take place on Monday 24 July. The workshops will take place on Friday 28 and Saturday 29 July. You can find out more about the conference here.
'Video games open us to the whole spectrum of human emotions': novelist Gabrielle Zevin on Tomorrow, and Tomorrow, and Tomorrow
Games have always been a part of writer Gabrielle Zevin's life. Her first experience, she recalls, was playing Pac-Man at the Honolulu hotel where her grandmother ran a jewellery store. "I was about three years old at the time and I remember thinking, wouldn't it just be perfect if I wasn't limited to a single quarter … if I could just keep playing this game for ever and ever?" Now 44, the veteran author has written her first novel about games. Tomorrow, and Tomorrow, and Tomorrow is the story of two programmers, Sam and Sadie, who set up a studio in the mid-1990s and over the course of a decade, make interesting games while their lives and relationships entwine in complex, often heartbreaking ways.
Honolulu police used a robot dog to patrol a homeless quarantine camp
Law enforcement agencies are just loving spending tax dollars on Boston Dynamics' horrifying robo-dog, Spot, to test out in increasingly tasteless, abjectly dystopian scenarios. Recently, the NYPD proudly trotted out its own $94,000 quadrupedal dog-bot for street patrols, only to terminate its contract with Spot's makers barely two months later following New Yorkers' collective "Fuck this shit" response. Now, renewed inquiries are detailing somehow even more horrifying usages -- Honolulu police employed their own $150,045 federally funded Spot to "take body temperatures, disinfect, and patrol the city's homeless quarantine encampment" during the COVID-19 pandemic. Costly thermometer -- "As for its use helping Honolulu combat COVID-19, the city's spending data says Spot was purchased to take people's temperatures at HPD's tent city for homeless people," reported the Honolulu Civil Beat back in January, "In other words, its ostensible use is as a thermometer, according to the city's spending justification, though HPD says it can do more." "The only question the city council asked of HPD [during a January hearing] was whether the robot could be used to crack down on Honolulu's fireworks problem," added Motherboard in an update earlier today.
From WEF19 to AAAI19: Reflections on the Way
While on my way from the beautiful snows of Davos to the warmth at AAAI19 in Honolulu, I've been reflecting about events and discussions at the 2019 World Economic Forum (WEF19). I found that energy and passion were high at WEF19. I especially enjoyed my 1:1 conversations with leaders from industry, government, academia, and civil society. Advances in AI and their influences seemed to pervade conversations and presentations at WEF19. It was inspiring to see the great enthusiasm about advances in AI.
Amazon team taps millions of Alexa interactions to reduce NLP error rate
Developers have to collect thousands of voice samples and annotate them by hand, a process that often takes weeks. That's why researchers at Amazon's Alexa division pursued transfer learning, which leverages a neural network -- i.e., layers of mathematical functions that mimic neurons in the brain -- trained on a large dataset of previously annotated samples to bootstrap training in a new domain with sparse data. In a newly published paper ("Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents"), Alexa AI scientists describe a technique that taps millions of unannotated interactions with Amazon's voice assistant to reduce errors by 8 percent. They'll present the fruit of their labor at the Association for the Advancement of Artificial Intelligence (AAAI) in Honolulu, Hawaii later this year. These interactions were used to train an AI system to generate embeddings -- numerical representations of words -- such that words with similar functions were grouped closely together.
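The embedding idea described above -- words with similar functions ending up close together as vectors -- can be illustrated with a toy count-based method (a co-occurrence matrix factorized with SVD). This is a generic classroom-style sketch on an invented mini-corpus, not Amazon's actual technique or data:

```python
import numpy as np

# Tiny invented corpus; "play" and "stream" appear in identical contexts.
corpus = [
    "play some music",
    "stream some music",
    "play a song",
    "stream a song",
    "set a timer",
]
tokens = sorted({w for s in corpus for w in s.split()})
index = {w: i for i, w in enumerate(tokens)}

# Symmetric word-by-word co-occurrence counts within each utterance.
cooc = np.zeros((len(tokens), len(tokens)))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, v in enumerate(words):
            if i != j:
                cooc[index[w], index[v]] += 1

# A low-rank SVD of the co-occurrence matrix yields dense word embeddings.
U, S, _ = np.linalg.svd(cooc)
embeddings = U[:, :2] * S[:2]  # 2-dimensional embeddings

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words used in the same contexts ("play" vs "stream") score higher than
# words used in different contexts ("play" vs "timer").
sim_related = cosine(embeddings[index["play"]], embeddings[index["stream"]])
sim_unrelated = cosine(embeddings[index["play"]], embeddings[index["timer"]])
print(sim_related, sim_unrelated)
```

Modern systems learn embeddings with neural networks over far larger corpora, but the property the article describes -- functionally similar words clustering together -- already emerges from this simple factorization.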